Swarm Behavior


Discovery and Deployment of Emergent Robot Swarm Behaviors via Representation Learning and Real2Sim2Real Transfer

Mattson, Connor, Raveendra, Varun, Vega, Ricardo, Nowzari, Cameron, Drew, Daniel S., Brown, Daniel S.

arXiv.org Artificial Intelligence

Given a swarm of limited-capability robots, we seek to automatically discover the set of possible emergent behaviors. Prior approaches to behavior discovery rely on human feedback or hand-crafted behavior metrics to represent and evolve behaviors and only discover behaviors in simulation, without testing or considering the deployment of these new behaviors on real robot swarms. In this work, we present Real2Sim2Real Behavior Discovery via Self-Supervised Representation Learning, which combines representation learning and novelty search to discover possible emergent behaviors automatically in simulation and enable direct controller transfer to real robots. First, we evaluate our method in simulation and show that our proposed self-supervised representation learning approach outperforms previous hand-crafted metrics by more accurately representing the space of possible emergent behaviors. Then, we address the reality gap by incorporating recent work in sim2real transfer for swarms into our lightweight simulator design, enabling direct robot deployment of all behaviors discovered in simulation on an open-source and low-cost robot platform.
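Although the paper's behavior encoder is learned via self-supervision, the novelty-search side of the pipeline can be sketched independently. Below is a minimal, hypothetical sketch in which `embed` stands in for the learned encoder mapping a controller's simulated rollout to a behavior vector; a controller is archived when its embedding is far from everything seen so far (names and thresholds are illustrative, not from the paper):

```python
import numpy as np

def novelty(candidate, archive, k=3):
    """Novelty = mean Euclidean distance to the k nearest archived
    behavior embeddings (sparseness in behavior space)."""
    if len(archive) == 0:
        return float("inf")
    dists = np.sort([np.linalg.norm(candidate - a) for a in archive])
    return float(np.mean(dists[:k]))

def novelty_search(embed, sample_controller, generations=20, threshold=0.5):
    """Minimal novelty-search loop: simulate and embed candidate
    controllers, archiving those whose behavior embedding is novel."""
    archive = []
    for _ in range(generations):
        controller = sample_controller()
        z = embed(controller)
        if novelty(z, archive) > threshold:
            archive.append(z)
    return archive
```

In the paper, the archive of novel behaviors is what gets transferred to real robots; here the loop only illustrates how a learned embedding plugs into novelty search.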


Swarm-Gen: Fast Generation of Diverse Feasible Swarm Behaviors

Idoko, Simon, Teja, B. Bhanu, Krishna, K. Madhava, Singh, Arun Kumar

arXiv.org Artificial Intelligence

Coordination behavior in robot swarms is inherently multi-modal in nature. That is, there are numerous ways in which a swarm of robots can avoid inter-agent collisions and reach their respective goals. However, the problem of generating diverse and feasible swarm behaviors in a scalable manner remains largely unaddressed. In this paper, we fill this gap by combining generative models with a safety-filter (SF). Specifically, we sample diverse trajectories from a learned generative model which is subsequently projected onto the feasible set using the SF. We experiment with two choices for generative models, namely: Conditional Variational Autoencoder (CVAE) and Vector-Quantized Variational Autoencoder (VQ-VAE). We highlight the trade-offs these two models provide in terms of computation time and trajectory diversity. We develop a custom solver for our SF and equip it with a neural network that predicts context-specific initialization. The initialization network is trained in a self-supervised manner, taking advantage of the differentiability of the SF solver. We provide two sets of empirical results. First, we demonstrate that we can generate a large set of multi-modal, feasible trajectories, simulating diverse swarm behaviors, within a few tens of milliseconds. Second, we show that our initialization network provides faster convergence of our SF solver vis-a-vis other alternative heuristics.
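The sample-then-project pattern can be illustrated with a toy safety filter. The paper's SF is a custom solver with a learned, context-specific initialization; the sketch below substitutes a simple iterative pairwise-separation projection and assumes trajectories of shape `(n_agents, T, 2)` have already been sampled from the generative model:

```python
import numpy as np

def safety_filter(trajs, d_min=0.5, iters=50):
    """Toy stand-in for the SF projection: repeatedly push apart any
    pair of agents closer than d_min at the same timestep, until all
    pairwise distances are feasible (or iters is exhausted)."""
    trajs = trajs.copy()
    n, T, _ = trajs.shape
    for _ in range(iters):
        done = True
        for t in range(T):
            for i in range(n):
                for j in range(i + 1, n):
                    d = trajs[i, t] - trajs[j, t]
                    dist = np.linalg.norm(d)
                    if dist < d_min:
                        done = False
                        # split the separation correction between the pair
                        push = (d_min - dist) / 2 * (d / (dist + 1e-9))
                        trajs[i, t] += push
                        trajs[j, t] -= push
        if done:
            break
    return trajs
```

A real SF would also enforce dynamics and goal constraints; this sketch only shows why the projection step preserves the diversity of the sampled trajectories while repairing their feasibility.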


Agent-Based Emulation for Deploying Robot Swarm Behaviors

Vega, Ricardo, Zhu, Kevin, Mattson, Connor, Brown, Daniel S., Nowzari, Cameron

arXiv.org Artificial Intelligence

Despite significant research, robotic swarms have yet to be useful in solving real-world problems, largely due to the difficulty of creating and controlling swarming behaviors in multi-agent systems. Traditional top-down approaches in which a desired emergent behavior is produced often require complex, resource-heavy robots, limiting their practicality. This paper introduces a bottom-up approach by employing an Embodied Agent-Based Modeling and Simulation approach, emphasizing the use of simple robots and identifying conditions that naturally lead to self-organized collective behaviors. Using the Reality-to-Simulation-to-Reality for Swarms (RSRS) process, we tightly integrate real-world experiments with simulations to reproduce known swarm behaviors as well as to discover a novel emergent behavior, without aiming to eliminate or even reduce the sim2real gap. This paper presents the development of an Agent-Based Embodiment and Emulation process that balances the value of running physical swarming experiments against the prohibitively time-consuming process of setting up and running even a single experiment with 20+ robots, leveraging low-fidelity, lightweight simulations to form hypotheses that guide physical experiments. We demonstrate the usefulness of our methods by emulating two known behaviors from the literature and show a third behavior `discovered' by accident.


Performance Prediction of Hub-Based Swarms

Jain, Puneet, Dwivedi, Chaitanya, Bhatt, Vigynesh, Smith, Nick, Goodrich, Michael A

arXiv.org Artificial Intelligence

A hub-based colony consists of multiple agents that share a common nest site called the hub. Agents perform tasks away from the hub like foraging for food or gathering information about future nest sites. Modeling hub-based colonies is challenging because the size of the collective state space grows rapidly as the number of agents grows. This paper presents a graph-based representation of the colony that can be combined with graph-based encoders to create low-dimensional representations of collective state that can scale to many agents for a best-of-N colony problem. We demonstrate how the information in the low-dimensional embedding can be used with two experiments. First, we show how the information in the embedding can be used to cluster collective states by the probability of choosing the best site for a very small problem. Second, we show how structured collective trajectories emerge when a graph encoder is used to learn the low-dimensional embedding, and these trajectories have information that can be used to predict swarm performance.
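As a rough illustration of why a graph-based representation scales, the sketch below embeds a star-shaped colony graph (hub at the center, agents as leaves) using one round of hub-agent message averaging followed by mean pooling. This is a hand-rolled stand-in for the paper's learned graph encoder; the key property is that the output size is independent of the number of agents:

```python
import numpy as np

def colony_embedding(agent_states, hub_state):
    """Fixed-size embedding of a star-graph colony state.  One round of
    neighbor averaging (hub aggregates agents, agents mix in the hub),
    then mean pooling over agents, so any colony size maps to 2*d dims."""
    agents = np.asarray(agent_states, dtype=float)   # (n_agents, d)
    hub = np.asarray(hub_state, dtype=float)         # (d,)
    hub_msg = agents.mean(axis=0)                    # hub <- agents
    new_hub = (hub + hub_msg) / 2
    new_agents = (agents + hub) / 2                  # agents <- hub
    return np.concatenate([new_hub, new_agents.mean(axis=0)])
```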


Innate Motivation for Robot Swarms by Minimizing Surprise: From Simple Simulations to Real-World Experiments

Kaiser, Tanja Katharina, Hamann, Heiko

arXiv.org Artificial Intelligence

Applications of large-scale mobile multi-robot systems can be beneficial over monolithic robots because of higher potential for robustness and scalability. Developing controllers for multi-robot systems is challenging because the multitude of interactions is hard to anticipate and difficult to model. Automatic design using machine learning or evolutionary robotics seems to be an option to avoid that challenge, but brings the challenge of designing reward or fitness functions. Generic reward and fitness functions seem unlikely to exist and task-specific rewards often have undesired side effects. Approaches based on so-called innate motivation try to avoid the specific formulation of rewards and work instead with different drivers, such as curiosity. Our approach to innate motivation is to minimize surprise, which we implement by maximizing the accuracy of the swarm robots' sensor predictions using neuroevolution. A unique advantage of the swarm robot case is that swarm members populate the robot's environment and can trigger more active behaviors in a self-referential loop. We summarize our previous simulation-based results concerning behavioral diversity, robustness, scalability, and engineered self-organization, and put them into context. In several new studies, we analyze the influence of the optimizer's hyperparameters, the scalability of evolved behaviors, and the impact of realistic robot simulations. Finally, we present results using real robots that show how the reality gap can be bridged.
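The minimize-surprise objective itself is compact: fitness is simply how well a robot predicts its own next sensor readings over a rollout. A hedged sketch, with `predict` standing in for the evolved predictor network and `rollout` a list of consecutive sensor-reading pairs from simulation (both names are illustrative, not from the paper's code):

```python
import numpy as np

def surprise_fitness(predict, rollout):
    """Minimize-surprise fitness: mean accuracy of the robot's own
    sensor predictions over a rollout of (sensors_t, sensors_t+1)
    pairs.  Higher is better; a perfect predictor scores 1.0."""
    errors = [np.abs(predict(s) - s_next).mean() for s, s_next in rollout]
    return 1.0 - float(np.mean(errors))
```

Because other swarm members are part of each robot's environment, behaviors that make the world predictable (e.g., clustering or halting) tend to score well, which is the self-referential loop the abstract describes.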


ROS2swarm - A ROS 2 Package for Swarm Robot Behaviors

Kaiser, Tanja Katharina, Begemann, Marian Johannes, Plattenteich, Tavia, Schilling, Lars, Schildbach, Georg, Hamann, Heiko

arXiv.org Artificial Intelligence

Developing reusable software for mobile robots is still challenging. Even more so for swarm robots, despite the desired simplicity of the robot controllers. Prototyping and experimenting are difficult due to the multi-robot setting and often require robot-robot communication. Also, the diversity of swarm robot hardware platforms increases the need for hardware-independent software concepts. The main advantages of the commonly used robot software architecture ROS 2 are modularity and platform independence. We propose a new ROS 2 package, ROS2swarm, for applications of swarm robotics that provides a library of ready-to-use swarm behavioral primitives. We show the successful application of our approach on three different platforms, the TurtleBot3 Burger, the TurtleBot3 Waffle Pi, and the Jackal UGV, and with a set of different behavioral primitives, such as aggregation, dispersion, and collective decision-making. The proposed approach is easy to maintain, extendable, and has good potential for simplifying swarm robotics experiments in future applications.
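ROS2swarm packages its primitives as ROS 2 nodes; without reproducing the package's actual API, the core logic of a dispersion primitive can be sketched as a pure function mapping sensed neighbor positions to a velocity command (parameter names are illustrative, not ROS2swarm's):

```python
import numpy as np

def dispersion_step(my_pos, neighbor_pos, max_range=1.0, gain=0.5):
    """Sketch of a dispersion behavioral primitive: move away from the
    centroid of neighbors within sensing range.  Returns a 2D velocity
    command; holds position when no neighbors are sensed."""
    my_pos = np.asarray(my_pos, float)
    near = [np.asarray(p, float) for p in neighbor_pos
            if np.linalg.norm(np.asarray(p, float) - my_pos) < max_range]
    if not near:
        return np.zeros(2)
    centroid = np.mean(near, axis=0)
    return gain * (my_pos - centroid)
```

In a ROS 2 setting this function would sit inside a node that subscribes to range sensing and publishes the command as a Twist message, which is what makes the primitive hardware-independent.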


Human-guided Swarms: Impedance Control-inspired Influence in Virtual Reality Environments

Barclay, Spencer, Jerath, Kshitij

arXiv.org Artificial Intelligence

As the potential for societal integration of multi-agent robotic systems increases [1], the need to manage the collective behaviors of such systems also increases [2, 3, 4]. There has been significant research effort directed towards the examination of how humans can assist in controlling such collective behaviors, such as in human-swarm interactions [5, 6, 7]. Agent-agent interactions in a swarm of small unmanned aerial systems (sUAS) lead to the emergence of collective behaviors that enable effective coverage and exploration across large spatial extents. However, the same inherent collective behaviors can occasionally limit the ability of the sUAS swarm to focus on specific objects of interest during coverage or exploration missions [8]. In these scenarios, the human operator or supervisor should have the opportunity to fractionally revoke or limit emergent swarm behaviors, and guide the swarm to achieve mission objectives. For most applications, including in industry- and defense-related contexts, such human-swarm interaction (HSI) will likely require intuitive and predictable mechanisms of control to quickly translate the input of the human (such as a gesture) to an influence or effect on the sUAS swarm. The goal of our work is to create an intuitive interface for a human supervisor to influence or guide an sUAS swarm without excessive incursions on the decentralized control afforded by these systems, while attempting to create more predictable behaviors. This is a potentially valuable approach that can enable the full utilization of swarm capabilities, while also retaining an ongoing macroscopic level of swarm control in scenarios where focus on specific regions of interest is required (e.g., search and rescue, surveillance operations) [9]. The influence mechanism has been implemented and tested using 16 drones in a photo-realistic virtual reality (VR) environment (as shown in Figure 1).
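The impedance-control inspiration can be made concrete with the classic virtual spring-damper law. This is a sketch of the general idea, not the paper's implementation: the operator's input position exerts F = k (x_op - x) - c v on each agent, and that force is blended with the swarm's own decentralized control:

```python
import numpy as np

def influence_force(agent_pos, agent_vel, operator_pos, k=2.0, c=1.0):
    """Impedance-style operator influence on one agent: a virtual
    spring (stiffness k) pulls the agent toward the operator's input
    position, while a virtual damper (coefficient c) resists the
    agent's velocity, keeping the response smooth and predictable."""
    agent_pos = np.asarray(agent_pos, float)
    agent_vel = np.asarray(agent_vel, float)
    operator_pos = np.asarray(operator_pos, float)
    return k * (operator_pos - agent_pos) - c * agent_vel
```

Tuning k against the weight of the swarm's own inter-agent forces determines how "fractional" the operator's revocation of emergent behavior is.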


Learning Emergent Behavior in Robot Swarms with NEAT

Rajbhandari, Pranav, Sofge, Donald

arXiv.org Artificial Intelligence

When researching robot swarms, many studies observe complex group behavior emerging from the individual agents' simple local actions. However, the task of learning an individual policy to produce a desired emergent behavior remains a challenging and largely unsolved problem. We present a method of training distributed robotic swarm algorithms to produce emergent behavior. Inspired by the biological evolution of emergent behavior in animals, we use an evolutionary algorithm to train a 'population' of individual behaviors to approximate a desired group behavior. We perform experiments in the CoppeliaSim simulator using simulations of the Georgia Tech Miniature Autonomous Blimps (GT-MABs) aerial robotics platform. Additionally, we test on simulations of Anki Vector robots to display our algorithm's effectiveness on various modes of actuation. We evaluate our algorithm on various tasks where a somewhat complex group behavior is required for success. These tasks include an Area Coverage task, a Surround Target task, and a Wall Climb task. We compare behaviors evolved using our algorithm against 'designed policies', which we create in order to exhibit the emergent behaviors we desire.
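The evolutionary outer loop can be sketched without NEAT's topology mutations: every agent runs the same policy, and selection acts on a group-level fitness rather than on individual actions. Below is a minimal hill-climbing stand-in (NEAT itself additionally evolves network structure and maintains species; all names here are illustrative):

```python
import numpy as np

def evolve_policy(group_fitness, dim, generations=30, pop_size=10,
                  sigma=0.1, seed=0):
    """Evolve a shared policy parameter vector against a group-level
    fitness.  Each generation mutates the incumbent pop_size times and
    keeps the best, so fitness is monotonically non-decreasing."""
    rng = np.random.default_rng(seed)
    best = rng.normal(size=dim)
    best_f = group_fitness(best)
    for _ in range(generations):
        for _ in range(pop_size):
            cand = best + sigma * rng.normal(size=dim)
            f = group_fitness(cand)
            if f > best_f:
                best, best_f = cand, f
    return best, best_f
```

For the paper's tasks, `group_fitness` would run a full multi-agent simulation (e.g., measuring area covered) with every blimp executing the candidate policy.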


Leveraging Human Feedback to Evolve and Discover Novel Emergent Behaviors in Robot Swarms

Mattson, Connor, Brown, Daniel S.

arXiv.org Artificial Intelligence

Robot swarms often exhibit emergent behaviors that are fascinating to observe; however, it is often difficult to predict what swarm behaviors can emerge under a given set of agent capabilities. We seek to efficiently leverage human input to automatically discover a taxonomy of collective behaviors that can emerge from a particular multi-agent system, without requiring the human to know beforehand what behaviors are interesting or even possible. Our proposed approach adapts to user preferences by learning a similarity space over swarm collective behaviors using self-supervised learning and human-in-the-loop queries. We combine our learned similarity metric with novelty search and clustering to explore and categorize the space of possible swarm behaviors. We also propose several general-purpose heuristics that improve the efficiency of our novelty search by prioritizing robot controllers that are likely to lead to interesting emergent behaviors. We test our approach in simulation on two robot capability models and show that our methods consistently discover a richer set of emergent behaviors than prior work. Code, videos, and datasets are available at https://sites.google.com/view/evolving-novel-swarms.
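The human-in-the-loop similarity learning reduces to metric learning from comparison queries. Below is a minimal, hypothetical stand-in using a linear embedding and one triplet hinge-loss gradient step per answered "which two behaviors are more alike?" query (the paper combines self-supervised pretraining with such queries; this only shows the update mechanics):

```python
import numpy as np

def triplet_update(W, anchor, pos, neg, margin=1.0, lr=0.01):
    """One gradient step of the triplet hinge loss on a linear
    embedding z = W x: pull the anchor toward the human-chosen similar
    behavior (pos) and push it from the dissimilar one (neg)."""
    za, zp, zn = W @ anchor, W @ pos, W @ neg
    loss = np.sum((za - zp) ** 2) - np.sum((za - zn) ** 2) + margin
    if loss <= 0:
        return W, 0.0  # margin already satisfied: no update
    # gradient of the active hinge with respect to W
    grad = (2 * np.outer(za - zp, anchor - pos)
            - 2 * np.outer(za - zn, anchor - neg))
    return W - lr * grad, float(loss)
```

Repeating this update over many queries shapes the embedding so that distances match human judgments, which is the similarity space the novelty search then explores.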


Teaching robots to be team players with nature

#artificialintelligence

This en masse behavior by individual organisms can provide separate and collective good, such as improving the chances of successful mating or propagation, or providing security. Now, researchers have harnessed the self-organization skills required to reap the benefits of natural swarms for robotic applications in artificial intelligence, computing, search and rescue, and much more. They published their method on Aug. 3 in Intelligent Computing. "Designing a set of rules that, once executed by a swarm of robots, results in a specific desired behavior is particularly challenging," said corresponding author Marco Dorigo, professor in the artificial intelligence laboratory, named IRIDIA, of the Université Libre de Bruxelles, Belgium. "The behavior of the swarm is not a one-to-one map with simple rules executed by individual robots, but rather results from the complex interactions of many robots executing the same set of rules."